Abstract Existing literature on information sharing in contests has established that sharing contest-specific information influences contestant behaviors and, thereby, the outcomes of a contest. However, in the context of engineering design contests, there is a gap in knowledge about how contest-specific information, such as competitors’ historical performance, influences designers’ actions and the resulting design outcomes. To address this gap, the objective of this study is to quantify the influence of information about competitors’ past performance on designers’ beliefs about the outcomes of a contest, which shape their design decisions and the resulting design outcomes. We focus on a single-stage design competition where an objective figure of merit is available to the contestants for assessing the performance of their design. Our approach includes (i) developing a behavioral model of sequential decision making that accounts for information about competitors’ historical performance and (ii) using the model in conjunction with a human-subject experiment where participants make design decisions given controlled strong or weak performance records of past competitors. Our results indicate that participants exert greater effort when they know that the contest history reflects a strong performance record by past competitors than when it reflects a weak one. Moreover, we quantify the cognitive underpinnings of this informational influence via our model parameters. Based on the parametric inferences about participants’ cognition, we suggest that contest designers are better off not providing historical performance records if past contest outcomes do not match the expectations set for a given design contest.
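The reported effect — a stronger perceived competitor record inducing more search effort — can be sketched as an aspiration-based stopping rule. The uniform quality draws, aspiration levels, and budget below are illustrative assumptions, not the paper's behavioral model:

```python
import random

random.seed(1)
draws = [random.random() for _ in range(30)]  # shared sequence of design-quality draws

def tries_until_target(aspiration, draws):
    """Count search tries until a draw meets the aspiration level implied by
    the opponent's record, capped at the budget (len(draws))."""
    for i, quality in enumerate(draws, start=1):
        if quality >= aspiration:
            return i
    return len(draws)

strong = tries_until_target(0.9, draws)  # strong opponent record -> high aspiration
weak = tries_until_target(0.3, draws)    # weak opponent record -> low aspiration
```

Because both runs face the same draw sequence, the higher aspiration can never be met earlier than the lower one, so the strong-record contestant spends at least as many tries.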
-
Abstract Extracting an individual’s scientific knowledge is essential for improving educational assessment and for understanding cognitive tasks in engineering activities such as reasoning and decision making. However, knowledge extraction is an almost impossible endeavor if the domain of knowledge and the available observational data are unrestricted. The objective of this paper is to quantify individuals’ theory-based causal knowledge from their responses to given questions. Our approach uses directed acyclic graphs (DAGs) to represent causal knowledge for a given theory and a graph-based logistic model that maps individuals’ question-specific subgraphs to question responses. We follow a hierarchical Bayesian approach to estimate individuals’ DAGs from observations. The method is illustrated using 205 engineering students’ responses to questions on fatigue analysis in mechanical parts. In our results, we demonstrate how the developed methodology provides estimates of the population-level DAG and of DAGs for individual students. This dual representation is essential for remediation because it allows us to identify the parts of a theory that a population or an individual struggles with and the parts they have already mastered. A further benefit of the method is that it enables predictions about individuals’ responses to new questions based on the inferred individual-specific DAGs. The latter has implications for the descriptive modeling of human problem solving, a critical ingredient in sociotechnical systems modeling.
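The mapping from a question-specific subgraph to a response probability can be illustrated with a minimal logistic link. The fatigue-analysis edges and coefficients below are hypothetical placeholders, not the estimated model:

```python
import math

# Hypothetical causal DAG for a fatigue-analysis theory, as directed edges.
THEORY_DAG = {("load", "stress"), ("geometry", "stress"),
              ("stress", "fatigue_life"), ("material", "fatigue_life")}

def p_correct(individual_edges, question_edges, intercept=-2.0, slope=1.5):
    """Graph-based logistic model: the more question-relevant edges an
    individual's DAG contains, the likelier a correct response."""
    k = len(individual_edges & question_edges)
    return 1.0 / (1.0 + math.exp(-(intercept + slope * k)))

# A question probing the load -> stress -> fatigue_life chain:
question = {("load", "stress"), ("stress", "fatigue_life")}
novice = {("load", "stress")}             # holds one relevant edge
expert = set(THEORY_DAG)                  # holds the full theory
p_novice = p_correct(novice, question)    # k = 1
p_expert = p_correct(expert, question)    # k = 2
```

Hierarchical Bayesian estimation would place a population-level prior over which edges individuals hold and infer each student's edge set from their response pattern.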
-
Abstract Modern engineered systems, and learning-based systems in particular, present unprecedented complexity that requires advances in our methods for achieving confidence in mission success through test and evaluation (T&E). We define learning-based systems as engineered systems that incorporate a learning algorithm (artificial intelligence) as a component of the overall system. Part of this unparalleled complexity is the rate at which learning-based systems change relative to traditional engineered systems. Whereas traditional systems are expected to decline steadily in performance as they age, learning-based systems undergo constant change, which must be better understood to achieve high confidence in mission success. To this end, we propose pairing Bayesian methods with systems theory to quantify changes in operational conditions, changes in adversarial actions, the resultant changes in the learning-based system structure, and the resultant confidence measures in mission success. In this article, we provide insights into our overall goal and our progress toward developing a framework for evaluation through an understanding of the equivalence of testing.
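One way Bayesian updating can track confidence in mission success under system change, sketched with illustrative numbers rather than the authors' framework, is a conjugate Beta-Binomial model whose accumulated evidence is discounted whenever the learned component retrains:

```python
def update(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update of the mission-success probability."""
    return alpha + successes, beta + failures

def discount(alpha, beta, factor=0.8):
    """Down-weight accumulated evidence after the learned component changes,
    so older tests count for less (the discount factor is an assumption)."""
    return alpha * factor, beta * factor

a, b = 1.0, 1.0                                  # uninformative Beta(1, 1) prior
a, b = update(a, b, successes=18, failures=2)    # hypothetical T&E outcomes
confidence = a / (a + b)                         # posterior mean success probability
a, b = discount(a, b)                            # system retrains; evidence goes stale
effective_tests = a + b                          # shrinks, widening the posterior
```

Discounting leaves the posterior mean unchanged but reduces the effective test count, capturing the idea that a changed system needs fresh evidence to regain the same confidence.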
-
Abstract Heuristics are essential for addressing the complexities of engineering design processes. The goodness of a heuristic is context-dependent: appropriately tailored heuristics can enable designers to find good solutions efficiently, whereas inappropriate heuristics can result in cognitive biases and inferior design outcomes. While there have been several efforts to understand which heuristics designers use, there is a lack of normative understanding about when different heuristics are suitable. Toward addressing this gap, this paper presents a reinforcement learning-based approach to evaluate the goodness of heuristics for three sub-problems commonly faced by designers: (1) learning the map between the design space and the performance space, (2) acquiring sequential information, and (3) stopping the information acquisition process. Using a multi-armed bandit formulation and simulation studies, we learn the suitable heuristics for these individual sub-problems under different resource constraints and problem complexities. Additionally, we learn the optimal heuristics for the combined problem (i.e., the one comprising all three sub-problems) and compare them to those learned at the sub-problem level. The results of our simulation study indicate that the proposed reinforcement learning-based approach can be effective for determining the quality of heuristics for different problems and for understanding how their effectiveness changes as a function of the designer’s preference (e.g., performance versus cost), the complexity of the problem, and the resources available.
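The bandit formulation can be sketched by treating each candidate heuristic as an arm and learning which yields the best simulated design performance. The three heuristics and their payoff means below are invented for illustration:

```python
import random

random.seed(0)

TRUE_MEANS = [0.3, 0.5, 0.7]  # hypothetical payoffs of three candidate heuristics

def pull(arm):
    """Simulated design performance from applying heuristic `arm` once."""
    return random.gauss(TRUE_MEANS[arm], 0.1)

def epsilon_greedy(n_rounds=2000, eps=0.1):
    """Estimate each heuristic's value while mostly exploiting the best so far."""
    counts = [0] * len(TRUE_MEANS)
    values = [0.0] * len(TRUE_MEANS)
    for _ in range(n_rounds):
        if random.random() < eps:
            arm = random.randrange(len(TRUE_MEANS))                # explore
        else:
            arm = max(range(len(TRUE_MEANS)), key=values.__getitem__)  # exploit
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]        # running mean
    return values

estimates = epsilon_greedy()
best_heuristic = estimates.index(max(estimates))
```

Changing the reward to a cost-weighted performance measure would let the same loop rank heuristics under different designer preferences and resource constraints.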
-
The socio-technical perspective on engineering system design emphasizes the mutual dynamics between interdisciplinary interactions and system design outcomes. How different disciplines interact with each other depends on technical factors such as design interdependence and system performance. On the other hand, design outcomes are influenced by social factors such as the frequency of interactions and their distribution. Understanding this co-evolution can lead not only to better behavioral insights but also to more efficient communication pathways. In this context, we investigate how to quantify the temporal influences of social and technical factors on interdisciplinary interactions, as well as the influence of those interactions on system performance. We present a stochastic network-behavior dynamics model that quantifies design interdependence, discipline-specific interaction decisions, the evolution of system performance, and their mutual dynamics. We employ two datasets, one of student subjects designing an automotive engine and the other of NASA engineers designing a spacecraft. We then apply Bayesian inference to estimate model parameters and compare insights across the two datasets. The results indicate that design interdependence and social network statistics both have strong positive effects on interdisciplinary interactions for the expert and student subjects alike. For the student subjects, an additional modulating effect of system performance on interactions is observed. Conversely, the total number of interactions, irrespective of their discipline-wise distribution, has a weak but statistically significant positive effect on system performance in both cases. However, excessive interactions that merely mirror design interdependence, together with inflexible design space exploration, reduce system performance. These insights support the case for open organizational boundaries as a way of increasing interactions and improving system performance.
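The interaction-decision component of such a model can be illustrated as a logistic probability driven by technical and social covariates. The coefficients and covariate values are placeholders, not the inferred estimates:

```python
import math

def interaction_prob(interdependence, past_interaction_rate, performance_trend,
                     b0=-1.0, b1=2.0, b2=1.0, b3=0.5):
    """Probability that a discipline pair interacts at the next time step,
    increasing in design interdependence (technical factor), recent
    interaction history (social factor), and system performance trend."""
    z = (b0 + b1 * interdependence + b2 * past_interaction_rate
         + b3 * performance_trend)
    return 1.0 / (1.0 + math.exp(-z))

loosely_coupled = interaction_prob(0.1, 0.1, 0.0)   # weakly interdependent pair
tightly_coupled = interaction_prob(0.9, 0.6, 0.2)   # strongly interdependent pair
```

Bayesian inference over the `b` coefficients from observed interaction logs would then quantify how much each factor drives the network's evolution.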
-
Abstract Industry 4.0 is based on the digitization of manufacturing industries and has raised the prospect of substantial improvements in productivity, quality, and customer satisfaction. This digital transformation not only affects the way products are manufactured but also creates new opportunities for the design of products, processes, services, and systems. Unlike traditional design practices based on system-centric concepts, design for these new opportunities requires a holistic view of the human (stakeholder), artefact (product), and process (realization) dimensions of the design problem. In this paper, we envision a “human-cyber-physical view of the systems realization ecosystem,” termed “Design Engineering 4.0 (DE 4.0),” to reconceptualize how cyber and physical technologies can be seamlessly integrated to identify and fulfil customer needs and garner the benefits of Industry 4.0. We review the evolution of engineering design in response to advances in several strategic areas, including smart and connected products, end-to-end digital integration, customization and personalization, data-driven design, digital twins and intelligent design automation, extended supply chains and agile collaboration networks, open innovation, co-creation and crowdsourcing, product servitization and anything-as-a-service, and platformization for the sharing economy. We postulate that DE 4.0 will account for drivers such as the Internet of Things, Internet of People, Internet of Services, and Internet of Commerce to deliver on the promise of Industry 4.0 effectively and efficiently. Further, we identify key issues to be addressed in DE 4.0 and engage the design research community on the challenges that the future holds.
-
Abstract In this study, we focus on crowdsourcing contests for engineering design problems in which contestants search for design alternatives. Our stakeholder is the designer of such a contest, who requires support for decisions such as whether to share opponent-specific information with the contestants. There is a significant gap in our understanding of how sharing opponent-specific information influences a contestant’s information acquisition decisions, such as whether to stop searching for design alternatives; such decisions in turn affect the outcomes of a design contest. To address this gap, the objective of this study is to investigate how participants’ decisions to stop searching for a design solution are influenced by knowledge about their opponent’s past performance. The objective is achieved by conducting a protocol study in which participants are interviewed at the end of a behavioral experiment. In the experiment, participants compete against opponents with strong (or poor) performance records. We find that individuals decide to stop acquiring information based on various thresholds, such as a target design quality, the amount of resources they are willing to spend, and the design objective improvement they seek in sequential search. The threshold values for these stopping criteria are influenced by the contestant’s perception of the competitiveness of their opponent. Such insights can enable contest designers to make decisions about sharing opponent-specific information with participants, such as the resources utilized by the opponent, toward purposefully improving the outcomes of an engineering design contest.
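The reported thresholds — a quality target, a resource budget, and a minimum sought improvement — compose naturally into a disjunctive stopping rule. The numeric thresholds below are illustrative; the finding suggests that perceiving a strong opponent would raise the quality target:

```python
def should_stop(best_quality, resources_spent, last_improvement,
                quality_target=0.9, budget=10, min_improvement=0.02):
    """Stop sequential search when any threshold trips: target quality met,
    budget exhausted, or the latest improvement below aspiration.
    Threshold values here are hypothetical."""
    return (best_quality >= quality_target
            or resources_spent >= budget
            or last_improvement < min_improvement)

keep_searching = not should_stop(0.5, 3, 0.10)  # no threshold met yet
stop_on_target = should_stop(0.95, 3, 0.10)     # quality target reached
stop_on_gain = should_stop(0.5, 3, 0.005)       # diminishing improvement
```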
-
Abstract Designers make information acquisition decisions, such as where to search and when to stop the search. Such decisions are typically made sequentially, so that at every search step designers gain information by learning about the design space. When designers begin acquiring information, however, their decisions are based primarily on their prior knowledge. Prior knowledge influences the initial set of assumptions that designers use to learn about the design space. These assumptions are collectively termed inductive biases. Identifying such biases can help us better understand how designers use their prior knowledge to solve problems in light of uncertainty. Thus, in this study, we identify inductive biases in humans performing sequential information acquisition tasks. To do so, we analyze experimental data from a set of behavioral experiments conducted in the past [1–5]. All of these experiments were designed to study various factors that influence sequential information acquisition behaviors. Across these studies, we identify similar decision-making behaviors in the participants’ very first decision to “choose x”. We find that their choices of “x” are not uniformly distributed in the design space. Since such experiments are abstractions of real design scenarios, this implies that further contextualization of such experiments would only increase the influence of these biases. Thus, we highlight the need to study the influence of such biases to better understand designer behaviors. We conclude that, in the context of Bayesian modeling of designers’ behaviors, utilizing the identified inductive biases would enable us to model designers’ priors for design search contexts better than using non-informative priors.
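A first check for such an inductive bias is whether participants' initial "choose x" values depart from uniformity over the design space. A chi-square statistic against a uniform reference, run here on synthetic stand-in data rather than the cited experiments, sketches the idea:

```python
import random

random.seed(3)

def chi_square_vs_uniform(xs, n_bins=5, lo=0.0, hi=1.0):
    """Chi-square statistic of first-choice x values against a uniform
    distribution over the design space [lo, hi]; larger means less uniform."""
    counts = [0] * n_bins
    for x in xs:
        i = int((x - lo) / (hi - lo) * n_bins)
        counts[min(max(i, 0), n_bins - 1)] += 1  # clamp edge values into range
    expected = len(xs) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

# Synthetic stand-ins: biased first choices cluster mid-space, unbiased do not.
biased = [random.gauss(0.5, 0.1) for _ in range(200)]
unbiased = [random.random() for _ in range(200)]
stat_biased = chi_square_vs_uniform(biased)
stat_unbiased = chi_square_vs_uniform(unbiased)
```

In a Bayesian treatment, the empirical first-choice distribution itself could serve as the informative prior over initial design choices.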
-
Abstract Engineering design involves information acquisition decisions such as selecting designs in the design space for testing, selecting information sources, and deciding when to stop design exploration. Existing literature has established normative models for these decisions, but there is a lack of knowledge about how human designers actually make them and which strategies they use. This knowledge is important for accurately modeling design decisions, identifying sources of inefficiency, and improving the design process. Therefore, the primary objective of this study is to identify the models that best describe a designer’s information acquisition decisions when multiple information sources are present and the total budget is limited. We conduct a controlled human-subject experiment with two independent variables: the amount of fixed budget and a monetary incentive proportional to the saved budget. Using the experimental observations, we perform Bayesian model comparison on various simple heuristic models and expected utility (EU)-based models. As expected, the subjects’ decisions are better represented by the heuristic models than by the EU-based models. While the EU-based models result in better net payoff, the heuristic models used by the subjects generate better design performance. The net payoff under the heuristic models is closer to that of the EU-based models in experimental treatments where the budget is low and there is an incentive for saving it. This indicates the potential for nudging designers’ decisions toward maximizing the net payoff by setting the fixed budget at low values and providing monetary incentives proportional to the saved budget.
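Model comparison of this kind is often summarized with an information criterion; a BIC sketch with invented fit numbers shows the mechanics (the study's actual comparison is fully Bayesian, so this is only an analogy):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower values favor the model,
    trading fit quality against parameter count."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits to 100 observed information-acquisition decisions:
bic_heuristic = bic(log_likelihood=-40.0, n_params=2, n_obs=100)
bic_eu = bic(log_likelihood=-55.0, n_params=4, n_obs=100)
heuristic_preferred = bic_heuristic < bic_eu
```

Here the simpler heuristic model wins on both fit and parsimony, mirroring the qualitative finding that heuristic models describe the subjects' decisions better.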